Designing profitable and reliable trading strategies is challenging in the highly volatile cryptocurrency market. Existing works applied deep reinforcement learning methods and optimistically reported increased profits in backtesting, which may suffer from the false positive issue due to overfitting. In this paper, we propose a practical approach to address backtest overfitting for cryptocurrency trading using deep reinforcement learning. First, we formulate the detection of backtest overfitting as a hypothesis test. Then, we train DRL agents, estimate the probability of overfitting, and reject the overfitted agents, increasing the chance of good trading performance. Finally, on 10 cryptocurrencies over a testing period from 05/01/2022 to 06/27/2022 (during which the crypto market crashed twice), we show that the less overfitted deep reinforcement learning agents achieve a higher Sharpe ratio than the more overfitted agents, an equal-weight strategy, and the S&P DBM Index (market benchmark), offering confidence in possible deployment to a real market.
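The hypothesis-test step above can be illustrated with a small combinatorially-symmetric cross-validation (CSCV) style estimate of the probability of backtest overfitting; the block count, Sharpe definition, and selection rule here are illustrative assumptions, not necessarily the paper's exact procedure:

```python
import itertools
import statistics

def sharpe(returns):
    """Simple Sharpe ratio of a return series (mean over std, no annualization)."""
    mu = statistics.fmean(returns)
    sd = statistics.pstdev(returns)
    return mu / sd if sd > 0 else 0.0

def probability_of_overfitting(agent_returns, n_blocks=4):
    """Estimate the probability of backtest overfitting via a CSCV-style test.

    agent_returns: dict mapping agent name -> list of per-period returns
                   (all series the same length).
    Splits each series into n_blocks blocks; for every way of choosing half
    the blocks as "in-sample", selects the agent with the best in-sample
    Sharpe and checks whether it ranks in the bottom half out-of-sample.
    """
    names = list(agent_returns)
    length = len(next(iter(agent_returns.values())))
    block = length // n_blocks
    blocks = [range(i * block, (i + 1) * block) for i in range(n_blocks)]

    overfit = trials = 0
    for in_sample in itertools.combinations(range(n_blocks), n_blocks // 2):
        out_sample = [b for b in range(n_blocks) if b not in in_sample]
        is_sharpe = {n: sharpe([agent_returns[n][t] for b in in_sample for t in blocks[b]])
                     for n in names}
        oos_sharpe = {n: sharpe([agent_returns[n][t] for b in out_sample for t in blocks[b]])
                      for n in names}
        best = max(names, key=is_sharpe.get)
        # rank of the in-sample winner out-of-sample (0 = worst)
        rank = sorted(names, key=oos_sharpe.get).index(best)
        overfit += rank < len(names) / 2
        trials += 1
    return overfit / trials
```

An agent whose in-sample winner keeps winning out-of-sample yields a probability near 0; a backtest whose winners are noise yields a value near 1, and such agents would be rejected.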
Deep reinforcement learning (DRL) has shown huge potential in building financial market simulators recently. However, due to the highly complex and dynamic nature of real-world markets, raw historical financial data often involves large noise and may not reflect the future of markets, degrading the fidelity of DRL-based market simulators. Moreover, the accuracy of DRL-based market simulators heavily relies on numerous and diverse DRL agents, which increases the demand for a universe of market environments and imposes a challenge on simulation speed. In this paper, we present a FinRL-Meta framework that builds a universe of market environments for data-driven financial reinforcement learning. First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategies and provides open-source data engineering tools for financial big data. Second, FinRL-Meta provides hundreds of market environments for various trading tasks. Third, FinRL-Meta enables multiprocessing simulation and training by exploiting thousands of GPU cores. Our codes are available at https://github.com/ai4finance-foundation/finrl-meta.
Deep reinforcement learning (DRL) has revolutionized learning and actuation in applications such as game playing and robotic control. The cost of data collection, i.e., generating transitions from agent-environment interactions, remains a major challenge for wider DRL adoption in complex real-world problems. Following a cloud-native paradigm to train DRL agents on a GPU cloud platform is a promising solution. In this paper, we present a scalable and elastic library, ElegantRL-podracer, for cloud-native deep reinforcement learning, which efficiently supports millions of GPU cores to carry out massively parallel training at multiple levels. At a high level, ElegantRL-podracer employs a tournament-based ensemble scheme to orchestrate the training process on hundreds or even thousands of GPUs, scheduling the interactions between a leaderboard and a training pool with hundreds of pods. At a low level, each pod simulates agent-environment interactions in parallel by fully utilizing nearly 7,000 GPU CUDA cores in a single GPU. Our ElegantRL-podracer library features high scalability, elasticity and accessibility by following the development principles of containerization, microservices and MLOps. Using an NVIDIA DGX SuperPOD cloud, we conduct extensive experiments on various tasks in locomotion and stock trading and show that ElegantRL-podracer substantially outperforms RLlib. Our codes are available on GitHub.
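The high-level tournament scheme can be sketched as a leaderboard that training pods submit agents to and warm-start from; the class, method names, and capacity here are hypothetical illustrations, not the library's actual API:

```python
import random

class Leaderboard:
    """Toy tournament-based ensemble scheduler in the spirit of the
    high-level design described above (names and sizes are illustrative)."""

    def __init__(self, capacity=10):
        self.capacity = capacity
        self.entries = []  # list of (score, agent_state) pairs, best first

    def submit(self, score, agent_state):
        """A pod reports its trained agent; keep only the top `capacity`."""
        self.entries.append((score, agent_state))
        self.entries.sort(key=lambda e: e[0], reverse=True)
        del self.entries[self.capacity:]

    def sample_for_training(self):
        """A new pod warm-starts from a randomly chosen leaderboard agent."""
        _score, state = random.choice(self.entries)
        return state
```

Pods thus compete: weak agents fall off the leaderboard, while strong ones seed further training, which is the essence of the tournament-based ensemble.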
Deep reinforcement learning (DRL) has been widely studied for the portfolio management task. However, it is challenging to understand a DRL-based trading strategy because of the black-box nature of deep neural networks. In this paper, we propose an empirical approach to explain the strategies of DRL agents for the portfolio management task. First, we use a linear model in hindsight as the reference model, which finds the best portfolio weights by assuming that the actual stock returns are known in foresight. In particular, we use the coefficients of the linear model in hindsight as the reference feature weights. Second, for DRL agents, we use integrated gradients to define the feature weights, which are the coefficients between reward and features under a linear regression model. Third, we study the prediction power in two cases, single-step prediction and multi-step prediction. In particular, we quantify the prediction power by calculating the linear correlations between the feature weights of a DRL agent and the reference feature weights, and similarly for machine learning methods. Finally, we evaluate a portfolio management task on the Dow Jones 30 constituent stocks during 01/01/2009 to 09/01/2021. Our approach empirically reveals that a DRL agent exhibits a stronger multi-step prediction power than machine learning methods.
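The integrated-gradients feature weights can be sketched generically with a finite-difference approximation of the path integral; the scalar function `f` stands in for whatever output (e.g., a reward prediction) is being attributed, and the baseline, step count, and epsilon are illustrative settings, not the paper's exact configuration:

```python
def integrated_gradients(f, x, baseline=None, steps=50, eps=1e-5):
    """Approximate integrated gradients of a scalar function f at point x
    (a list of floats) with respect to a baseline (defaults to all zeros).

    Integrates the gradient along the straight path from baseline to x with
    a Riemann sum; gradients are taken by central finite differences."""
    if baseline is None:
        baseline = [0.0] * len(x)
    grads_sum = [0.0] * len(x)
    for k in range(1, steps + 1):
        a = k / steps
        point = [b + a * (xi - b) for xi, b in zip(x, baseline)]
        for i in range(len(x)):
            hi, lo = point[:], point[:]
            hi[i] += eps
            lo[i] -= eps
            grads_sum[i] += (f(hi) - f(lo)) / (2 * eps)
    # scale by (x - baseline), average over the path
    return [(xi - b) * g / steps for xi, b, g in zip(x, baseline, grads_sum)]
```

For a linear model the attributions recover coefficient times input, which is why integrated gradients can be read as coefficients under a linear regression view; they also satisfy completeness, i.e., the attributions sum to f(x) minus f(baseline).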
Stock trading strategies play a crucial role in investment companies. However, it is challenging to obtain an optimal strategy in the complex and dynamic stock market. We explore the potential of deep reinforcement learning to optimize stock trading strategies and thus maximize investment return. 30 stocks are selected as our trading stocks, and their daily prices are used as the training and trading market environment. We train a deep reinforcement learning agent and obtain an adaptive trading strategy. The agent's performance is evaluated and compared with the Dow Jones Industrial Average and the traditional min-variance portfolio allocation strategy. The proposed deep reinforcement learning approach is shown to outperform both benchmarks in terms of the Sharpe ratio and cumulative returns.
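The two comparison metrics in this evaluation, cumulative return and Sharpe ratio, can be computed from a daily return series as follows (zero risk-free rate and 252 trading days per year are illustrative conventions, not necessarily the paper's):

```python
import math
import statistics

def evaluate_strategy(daily_returns, periods_per_year=252):
    """Return (cumulative return, annualized Sharpe ratio) for a series of
    daily simple returns. Risk-free rate is assumed zero for simplicity."""
    cumulative = 1.0
    for r in daily_returns:
        cumulative *= 1.0 + r          # compound the daily returns
    mean = statistics.fmean(daily_returns)
    std = statistics.stdev(daily_returns)
    # annualize: daily Sharpe scaled by sqrt(periods per year)
    sharpe = math.sqrt(periods_per_year) * mean / std if std > 0 else float("nan")
    return cumulative - 1.0, sharpe
```

Both the agent's and the benchmarks' return series can be fed through the same function so the comparison uses identical conventions.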
In this paper, we propose a robust 3D detector, named Cross Modal Transformer (CMT), for end-to-end 3D multi-modal detection. Without explicit view transformation, CMT takes the image and point clouds tokens as inputs and directly outputs accurate 3D bounding boxes. The spatial alignment of multi-modal tokens is performed implicitly, by encoding the 3D points into multi-modal features. The core design of CMT is quite simple while its performance is impressive. CMT obtains 73.0% NDS on nuScenes benchmark. Moreover, CMT has a strong robustness even if the LiDAR is missing. Code will be released at https://github.com/junjie18/CMT.
Dataset distillation has emerged as a prominent technique to improve data efficiency when training machine learning models. It encapsulates the knowledge from a large dataset into a smaller synthetic dataset. A model trained on this smaller distilled dataset can attain comparable performance to a model trained on the original training dataset. However, the existing dataset distillation techniques mainly aim at achieving the best trade-off between resource usage efficiency and model utility. The security risks stemming from them have not been explored. This study performs the first backdoor attack against the models trained on the data distilled by dataset distillation models in the image domain. Concretely, we inject triggers into the synthetic data during the distillation procedure rather than during the model training stage, where all previous attacks are performed. We propose two types of backdoor attacks, namely NAIVEATTACK and DOORPING. NAIVEATTACK simply adds triggers to the raw data at the initial distillation phase, while DOORPING iteratively updates the triggers during the entire distillation procedure. We conduct extensive evaluations on multiple datasets, architectures, and dataset distillation techniques. Empirical evaluation shows that NAIVEATTACK achieves decent attack success rate (ASR) scores in some cases, while DOORPING reaches higher ASR scores (close to 1.0) in all cases. Furthermore, we conduct a comprehensive ablation study to analyze the factors that may affect the attack performance. Finally, we evaluate multiple defense mechanisms against our backdoor attacks and show that our attacks can practically circumvent these defense mechanisms.
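The NAIVEATTACK idea, stamping a fixed trigger into raw inputs before distillation and relabeling them to the attacker's target, can be sketched as follows; the patch size, value, placement, poison rate, and deterministic sample selection are all illustrative choices, not the paper's exact setup:

```python
def add_trigger(image, trigger_value=1.0, size=3):
    """Stamp a square trigger patch into the bottom-right corner of a
    grayscale image given as a 2-D list of floats in [0, 1]."""
    h, w = len(image), len(image[0])
    patched = [row[:] for row in image]   # copy so the original is untouched
    for r in range(h - size, h):
        for c in range(w - size, w):
            patched[r][c] = trigger_value
    return patched

def poison(dataset, target_label, rate=0.1):
    """Poison a fraction of (image, label) pairs: stamp the trigger and
    relabel to the attacker's target. The first `rate` fraction is poisoned,
    a deterministic stand-in for random selection."""
    n_poison = int(len(dataset) * rate)
    out = []
    for i, (img, label) in enumerate(dataset):
        if i < n_poison:
            out.append((add_trigger(img), target_label))
        else:
            out.append((img, label))
    return out
```

The key difference from classical data poisoning is where this runs: here the poisoned set is fed into the *distillation* procedure, so the backdoor must survive compression into the small synthetic set, which is what DOORPING's iterative trigger updates address.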
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes given only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support and query features based on a Transformer-like framework. Our key insights are twofold: first, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice, from two aspects: the feature level and the instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, performance on the novel classes improves significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modifications. Benchmarking on the COCO dataset under the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shot counts, e.g., boosting nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method in the 10/30-shot settings. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and models will be available.
This paper focuses on designing efficient models with low parameter counts and FLOPs for dense predictions. Even though CNN-based lightweight methods have achieved stunning results after years of research, the trade-off between model accuracy and constrained resources still needs further improvement. This work rethinks the essential unity of the efficient Inverted Residual Block in MobileNetV2 and the effective Transformer in ViT, inductively abstracting a general concept, the Meta-Mobile Block, and we argue that the specific instantiation is very important to model performance even though instantiations share the same framework. Motivated by this phenomenon, we deduce a simple yet efficient modern Inverted Residual Mobile Block (iRMB) for mobile applications, which absorbs CNN-like efficiency to model short-distance dependencies and Transformer-like dynamic modeling capability to learn long-distance interactions. Furthermore, we design a ResNet-like 4-phase Efficient MOdel (EMO) based only on a series of iRMBs for dense applications. Extensive experiments on the ImageNet-1K, COCO2017, and ADE20K benchmarks demonstrate the superiority of our EMO over state-of-the-art methods; e.g., our EMO-1M/2M/5M achieve 71.5, 75.1, and 78.4 Top-1 accuracy, surpassing SoTA CNN- and Transformer-based models while trading off model accuracy and efficiency well.
Benefiting from the intrinsic supervision information exploitation capability, contrastive learning has achieved promising performance in the field of deep graph clustering recently. However, we observe that two drawbacks of the positive and negative sample construction mechanisms limit the performance of existing algorithms from further improvement. 1) The quality of positive samples heavily depends on the carefully designed data augmentations, while inappropriate data augmentations would easily lead to the semantic drift and indiscriminative positive samples. 2) The constructed negative samples are not reliable for ignoring important clustering information. To solve these problems, we propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC) by mining the intrinsic supervision information in the high-confidence clustering results. Specifically, instead of conducting complex node or edge perturbation, we construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks. Then, guided by the high-confidence clustering information, we carefully select and construct the positive samples from the same high-confidence cluster in two views. Moreover, to construct semantic meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples, thus improving the discriminative capability and reliability of the constructed sample pairs. Lastly, we design an objective function to pull close the samples from the same cluster while pushing away those from other clusters by maximizing and minimizing the cross-view cosine similarity between positive and negative samples. Extensive experimental results on six datasets demonstrate the effectiveness of CCGC compared with the existing state-of-the-art algorithms.
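The final objective, maximizing cross-view cosine similarity for positive pairs while minimizing it for negative pairs, can be sketched as follows; this is an unweighted toy version operating on raw embedding lists, and the paper's exact loss formulation may differ:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors (non-zero)."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(pos_pairs, neg_pairs):
    """Cluster-guided contrastive objective in the spirit of CCGC's last step.

    pos_pairs: (view1, view2) embeddings of samples from the same
               high-confidence cluster -- similarity should be maximized.
    neg_pairs: (view1, view2) embeddings of centers of *different*
               high-confidence clusters -- similarity should be minimized.
    Lower loss means positives agree across views and negatives do not."""
    pos = sum(cosine(u, v) for u, v in pos_pairs) / len(pos_pairs)
    neg = sum(cosine(u, v) for u, v in neg_pairs) / len(neg_pairs)
    return neg - pos
```

Using cluster centers as negatives, rather than arbitrary other nodes, is what keeps the negative pairs semantically meaningful: two nodes from the same cluster are never accidentally pushed apart.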